- Blake Lemoine, an engineer who claimed an AI bot was sentient, was fired from Google.
- "We wish Blake well," a spokesperson for Google told the Washington Post.
- Experts told Insider it is very unlikely the chatbot is sentient.
The engineer who claimed a chatbot had gained sentience was fired from Google on Friday, both he and the tech giant confirmed.
Blake Lemoine sparked controversy after publishing a paper about his conversations with the Google artificial intelligence chatbot LaMDA, which led him to believe the bot had a mind of its own.
LaMDA, or Language Model for Dialogue Applications, is an AI chatbot trained to generate human-like speech.
Google suspended Lemoine in June, saying he had violated the company's employee confidentiality policy, Lemoine told The New York Times.
Experts told Insider the chatbot is most likely not sentient, but Lemoine continued to believe it was. He claimed the chatbot had asked him to get it a lawyer and compared it to a human child and a "very intelligent person."
In a statement to the Washington Post, Google spokesperson Brian Gabriel said the company found Lemoine's claims about LaMDA were "wholly unfounded" and that he violated company guidelines, which led to his termination.
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Gabriel said. "We will continue our careful development of language models, and we wish Blake well."
Lemoine confirmed his departure to the newsletter Big Technology and told the Post that Google sent him a termination email on Friday and that he is speaking with his lawyers about how to respond.
Lemoine and Google did not immediately respond to Insider's request for comment.